.require "memo.pub[let,jmc]" source;
.every heading (,DRAFT,);
.cb KNOWLEDGE OF TELEPHONE NUMBERS
An intelligent program will have to reason about who knows
what in order to predict what others will do. It will also have
to reason that if it looks a number up in a telephone book, then
it will know the number. The object of this note is to introduce
a formalism suitable for expressing such facts. The formalism
has the following characteristics:
1. The thing known is a term, i.e. a noun phrase like
"telephone(Mike)" or "Mike's telephone number".
This differs from the treatments common in philosophical logic in
two respects. First, modal treatments are now more popular than
treatments in which the thing known is a sentence. We will explain
the reasons for our preference later. Second, even when the object
of knowledge is syntactic, it is a sentence rather than a term,
and the meaning is knowing that the sentence
is true. In our case, even when the object is a sentence, the
meaning is knowing its truth value - which may
be false. The relation between these treatments will be explained
later, but ours seems more general.
2. We will restrict ourselves for now to problems in
which the value of the noun phrase is a LISP expression
such as a number or a string. Thus we will avoid
the knotty question of what you know when you say you know
John. However, we will be able to say we know someone's name
or address or telephone number or the truth value of an assertion
about him.
We will use the situation formalism of (McCarthy and Hayes 1970),
and a typical sentence is
%2knows(Pat,"telephone(Mike)",S0)%1
meaning that Pat knows Mike's telephone number in situation ⊗S0.
Suppose we want to argue that Pat can get into a conversation with
Mike by looking up Mike's telephone number in the telephone book
and dialing the number he finds.
The ⊗physics of the situation is given by
1) %2∀p1 p2 s. speaking(p1,p2,result(p1,dial(telephone(p2)),s)).%1
which asserts that a situation in which ⊗p1 is speaking to ⊗p2 results
when ⊗p1 dials %2p2%1's telephone number. We are ignoring qualifications
such as ⊗p2 being home that don't interest us for now.
Substituting Pat, Mike and ⊗S0 for ⊗p1, ⊗p2 and ⊗s gives particularized
sentences like
2) ⊗speaking(Pat,Mike,result(Pat,dial(telephone(Mike)),S0)).
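This instantiation is purely mechanical, and - as an illustration only, not
part of the formalism - it can be mimicked in present-day Common LISP by
representing sentences as S-expressions; the variable and data names below
are chosen just for the example.
.begin verbatim
;; Axiom 1) as an S-expression.  SUBLIS substitutes Pat, Mike and S0
;; for the variables p1, p2 and s, giving sentence 2).
(defparameter *axiom-1*
  '(speaking p1 p2 (result p1 (dial (telephone p2)) s)))

(sublis '((p1 . Pat) (p2 . Mike) (s . S0)) *axiom-1*)
;; => (SPEAKING PAT MIKE (RESULT PAT (DIAL (TELEPHONE MIKE)) S0))
.end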
Now we want to express the condition that Pat knows Mike's telephone
number. We can't do it by tinkering with 1), because 1) is still
true whether Pat knows Mike's telephone number or not, e.g. he may
dial it under the impression that it is someone else's telephone
number, but he will still find himself talking to Mike. That's
why we called 1) "physics".
Our first step is to introduce two functions - ⊗value(x),
where ⊗x is a term, and ⊗value(p,x,s) where ⊗p is a person, ⊗x
is a term, and ⊗s is a situation. ⊗value(x) is the value of
the term ⊗x, and ⊗value(p,x,s) is the value that person ⊗p
gives to the term ⊗x in situation ⊗s.
Thus %2value(Pat,"2+2",S0)%1 is what Pat thinks 2+2 is in situation ⊗S0.
We can now define
⊗knows(p,x,s), read ⊗p knows ⊗x in situation ⊗s by
3) %2∀p x s.(knows(p,x,s) ≡ value(p,x,s) = value(x))%1,
i.e. ⊗p knows ⊗x if and only if its value is what he thinks it is.
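To fix ideas, here is a toy model of these functions in present-day Common
LISP; it is only a sketch, the quoted terms are rendered as character strings,
the sample data are invented (the number is the one used later in this note),
and VALUE-FOR is an ad hoc name standing in for the three-argument ⊗value.
.begin verbatim
;; Toy model of definition 3).  *WORLD* gives the real values of quoted
;; terms, and *BELIEFS* gives the value a person assigns to a term in a
;; situation.
(defparameter *world*
  '(("telephone(Mike)" . "497-4971")))

(defparameter *beliefs*
  '(((Pat "telephone(Mike)" S0) . "497-4971")))

(defun value (term)
  (cdr (assoc term *world* :test #'equal)))

(defun value-for (person term situation)
  (cdr (assoc (list person term situation) *beliefs* :test #'equal)))

(defun knows (person term situation)        ; definition 3)
  (equal (value-for person term situation) (value term)))

(knows 'Pat "telephone(Mike)" 'S0)          ; => T
.end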
⊗value(x) is related to the term ⊗x by a so-called reflexion
principle that gives as axioms all statements like
4) %2value("telephone(Mike)") = telephone(Mike)%1.
Like Tarski's reflexion principle, this has to be built into
the logic and can't be expressed as a single axiom in the
language itself; it can be expressed in a suitable meta-language
such as English, although we have preferred to explain it here
with an example.
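A rough LISP analogy, offered only as an illustration: the reflexion principle
corresponds to the fact that evaluating the representation of a term gives the
same object as the term itself denotes. Here the quoted term is rendered as a
quoted S-expression, and the function TELEPHONE and its data are invented for
the example.
.begin verbatim
;; One instance of 4) in the LISP analogy: the value of the quoted term
;; equals the value of the term itself.
(defun telephone (person)
  (cdr (assoc person '((Mike . "497-4971")))))

(equal (eval '(telephone 'Mike)) (telephone 'Mike))   ; => T
.end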
Our eventual goal is a statement
5) %2speaking(Pat,Mike,result(Pat,dial(value(Pat,"telephone(Mike)",
result(Pat,lookup("Mike"),S0))),result(Pat,lookup("Mike"),S0))).%1
which says that if Pat dials what he thinks is Mike's telephone
number after looking up Mike (in the phone book), then he will
be speaking to Mike.
This can be derived from 1), a statement expressing the result
of Pat looking up Mike, namely
6) %2value(Pat,"telephone(Mike)",result(Pat,lookup("Mike"),S0))
= value("telephone(Mike)")%1,
and the instance 4) of the reflexion principle. Namely, 6) and 4) together
give %2value(Pat,"telephone(Mike)",result(Pat,lookup("Mike"),S0)) =
telephone(Mike)%1, so that %2telephone(Mike)%1 may be replaced by the
left-hand side in the instance of 1) obtained by taking ⊗s to be
%2result(Pat,lookup("Mike"),S0)%1, which yields 5).
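The replacement of an equal subterm can again be mimicked in Common LISP,
once more as an illustration only, reusing the S-expression rendering of
sentences and the string rendering of quoted terms.
.begin verbatim
;; Obtaining 5) from the instance of 1): replace the subterm (telephone Mike)
;; by the equal term that 6) and 4) provide.  *S1* abbreviates the situation
;; after the lookup.
(defparameter *s1* '(result Pat (lookup "Mike") S0))

(defparameter *instance-of-1*
  `(speaking Pat Mike (result Pat (dial (telephone Mike)) ,*s1*)))

(sublis (list (cons '(telephone Mike)
                    `(value Pat "telephone(Mike)" ,*s1*)))
        *instance-of-1*
        :test #'equal)
;; => the S-expression form of sentence 5)
.end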
A problem whose solution requires yet more notation arises
when we want to derive 6) from a more general assertion about the
result of looking up people's telephone numbers. We would like
to say something like
7*) %2∀x y s. knows(x,"telephone(y)",result(x,lookup("y"),s))%1,
and get the desired result by substituting Pat for ⊗x, Mike for ⊗y,
and ⊗S0 for ⊗s. However, this won't work unless we re-interpret
the meaning of the quotes in some non-standard way, as Quine
sometimes does with his quasi-quotes. If we do that, we will have to
support the re-interpretation by suitably tinkering with the logic. I prefer not
to tinker with the logic; instead we introduce an explicit
string substitution function ⊗subst as in LISP (⊗subst(x,y,z) is
the expression that results from substituting the expression ⊗x
for the symbol ⊗y in the expression ⊗z) and an explicit function
giving the name of a person. Thus we have
8) %2name(Mike) = "Mike"%1,
which is useful since we are squeamish about substituting
people into symbolic expressions and prefer to substitute
only their names (i.e. without actually proving it, we regard
the abuse of language that would be required to interpret
a substitution of Mike as meaning a substitution of "Mike" as
likely to produce an inconsistency even in our
limited context). We could regard 8) as the result of
a reflexion principle, i.e. as a converse of
9) %2value("Mike") = Mike%1,
but we can also consider modifying it to
8') %2name(Mike) = "Noonan, Michael C."%1,
which might be appropriate in the telephone context.
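The substitution involved is ordinary string substitution. The following
Common LISP sketch, in which SUBST-STR and NAME-OF are names invented for the
illustration and stand in for ⊗subst and ⊗name, renders the quoted terms as
character strings.
.begin verbatim
;; SUBST-STR plays the role of subst(x,y,z): replace every occurrence of the
;; substring OLD in STRING by NEW.  NAME-OF plays the role of name(x), with
;; the data of 8).
(defun subst-str (new old string)
  (with-output-to-string (out)
    (loop with start = 0
          for pos = (search old string :start2 start)
          while pos
          do (write-string string out :start start :end pos)
             (write-string new out)
             (setf start (+ pos (length old)))
          finally (write-string string out :start start))))

(defun name-of (person)
  (cdr (assoc person '((Mike . "Mike") (Pat . "Pat")))))

(subst-str (name-of 'Mike) "y" "telephone(y)")   ; => "telephone(Mike)"
.end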
We can now state our general law of looking up
telephone numbers in the form
10) %2∀x y s.(knows(x,subst(name(y),"y","telephone(y)"),
result(x,lookup(name(y)),s)))%1,
and the derivation proceeds first by substituting Pat for ⊗x,
Mike for ⊗y, and ⊗S0 for ⊗s to get
11) %2knows(Pat,subst(name(Mike),"y","telephone(y)"),
result(Pat,lookup(name(Mike)),S0))%1.
Using 8) makes this
12) %2knows(Pat,subst("Mike","y","telephone(y)"),
result(Pat,lookup("Mike"),S0))%1,
and carrying out the indicated substitution (assuming our
logic lets us do this without further ado, e.g. by using
the attachment mechanism in FOL) finally yields
13) %2knows(Pat,"telephone(Mike)",result(Pat,lookup("Mike"),S0))%1,
which by 3) is equivalent to 6).
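Evaluating, with the SUBST-STR and NAME-OF of the sketch above, the term that
10) says ⊗x knows after the lookup shows the mechanical character of steps 11)
through 13).
.begin verbatim
;; Using SUBST-STR and NAME-OF from the sketch above: the quoted term of 10)
;; with y = Mike.
(subst-str (name-of 'Mike) "y" "telephone(y)")
;; step 11) -> 12): name(Mike) = "Mike", by 8)
;; step 12) -> 13): the substitution yields "telephone(Mike)"
.end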
Admittedly we have introduced a lot of machinery -
⊗knows(p,x,s), ⊗value(x), ⊗value(p,x,s), ⊗name(x), and ⊗subst(x,y,z).
We also need the reflexion principle in the logic and the ability
to make syntactic calculations, e.g. performing the substitution.
On the other hand, we have avoided the logic of modal operators,
possible worlds, and giving rules in the logic permitting
substitutions into opaque contexts. I think the machinery
introduced will be useful for metamathematical reasoning generally.
We have not introduced any logic of ⊗value(p,x,s), and when we
do, we shall have to take care to avoid Montague's paradoxes.
So far we have merely done philosophical logic, and
done it rather cavalierly, i.e. without adequate attention
to the opinions of other people working in the field of knowledge
and belief. Now I want to consider how the goals of AI in
this matter differ from those of philosophy. First, our goal
is not to explicate the ordinary language usage of words and
sentences expressing knowledge or even to determine a correct
concept of knowledge. Our objective should be to develop a
formalism that we can show is adequate to express what an
artificial intelligence must know about who knows what and
how knowledge is to be obtained. We will start on that in
the next section of this paper. Before doing that, however,
I have a few final remarks explaining the choice of formalism.
1. We can try to make equivalences like
14) %2knows(Pat,"telephone(Mike)",S0) ≡ ∃x.K(Pat,subst(x,"x",
"telephone(Mike) = x"),S0)%1,
where ⊗K(person,sentence,situation) means that ⊗person knows the truth
of ⊗sentence in that situation.
This would reduce my definition of knowledge to the case
of knowing the truth of a sentence. I have two objections.
First, it complicates the case that I expect to arise most frequently
in AI - even the simplest such sentence involves ⊗subst.
Second, I am not sure it says the same thing anyway, although
this may be just a philosophical quibble for the initial
applications. Namely, suppose we have
%2K(Pat,"telephone(Mike) = 497-4971",S0)%1.
Does this mean that Pat will say yes when asked about the sentence, or
does it mean that he will answer correctly when asked for Mike's
telephone number or whose number is 497-4971 or even what is
the relation between Mike and 497-4971? I would like to
be able to distinguish these cases, and my first form settles
on the case that Pat will answer correctly when asked Mike's
telephone number.
2. Suppose that we are making a program to deal with
problems that are mostly physical, i.e. mostly we have the facts
and want to find a physical action that will achieve a goal.
However, later it turns out that the program must reason
about how to get some information by asking someone who has it.
I would like to be able to make the extension without overturning
the entire logical system. It seems to me that switching to modal
logic might require this, and the notion of possible worlds seems
dubious to me, especially as a new structure of possible worlds
may be required for each propositional attitude.
Anyway, valid or not, these are some of the reasons for introducing
a different approach. I should remark that knowing telephone
numbers could also be treated directly in a modal way, although
I don't know of any attempt to do so.
.bb Applications to AI
(tune in to the next exciting draft)
.begin verbatim
John McCarthy
Artificial Intelligence Laboratory
Stanford, California 94305
.end
.once indent 0;
{date}